12 research outputs found
Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
This work addresses the problem of semantic scene understanding under dense
fog. Although considerable progress has been made in semantic scene
understanding, it is mainly related to clear-weather scenes. Extending
recognition methods to adverse weather conditions such as fog is crucial for
outdoor applications. In this paper, we propose a novel method, named
Curriculum Model Adaptation (CMAda), which gradually adapts a semantic
segmentation model from light synthetic fog to dense real fog in multiple
steps, using both synthetic and real foggy data. In addition, we present three
other main stand-alone contributions: 1) a novel method to add synthetic fog to
real, clear-weather scenes using semantic input; 2) a new fog density
estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images,
with pixel-level semantic annotations for 16 images with dense fog. Our
experiments show that 1) our fog simulation slightly outperforms a
state-of-the-art competing simulation with respect to the task of semantic
foggy scene understanding (SFSU); 2) CMAda improves the performance of
state-of-the-art models for SFSU significantly by leveraging unlabeled real
foggy data. The datasets and code are publicly available.
Comment: final version, ECCV 2018
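The paper's fog simulation uses semantic input and is considerably more elaborate than what fits here, but the standard optical model that such simulations build on can be sketched directly: a clear image J is attenuated by a per-pixel transmittance t = exp(-beta * depth) and blended with a constant atmospheric light (airlight). The function name and all parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def add_synthetic_fog(clear, depth, beta=0.06, airlight=0.92):
    """Add homogeneous synthetic fog to a clear image via the standard
    optical model I = J * t + L * (1 - t), with per-pixel transmittance
    t = exp(-beta * depth). beta and airlight are hypothetical values."""
    t = np.exp(-beta * depth)[..., None]  # broadcast over color channels
    return clear * t + airlight * (1.0 - t)

# Toy example: a uniform 2x2 "image" with one near and one far column.
clear = np.full((2, 2, 3), 0.4)
depth = np.array([[5.0, 50.0], [5.0, 50.0]])  # meters, illustrative
foggy = add_synthetic_fog(clear, depth)
```

Distant pixels are pulled toward the airlight value while near pixels stay close to the clear image, which is exactly the depth-dependent degradation the curriculum in CMAda exploits by starting from light (high-transmittance) synthetic fog.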
Non-resonant dot-cavity coupling and its applications in resonant quantum dot spectroscopy
We present experimental investigations on the non-resonant dot-cavity
coupling of a single quantum dot inside a micro-pillar where the dot has been
resonantly excited in the s-shell, thereby avoiding the generation of
additional charges in the QD and its surrounding. As a direct proof of the pure
single dot-cavity system, strong photon anti-bunching is consistently observed
in the autocorrelation functions of the QD and the mode emission, as well as in
the cross-correlation function between the dot and mode signals. Strong Stokes
and anti-Stokes-like emission is observed for energetic QD-mode detunings of up
to ~100 times the QD linewidth. Furthermore, we demonstrate that non-resonant
dot-cavity coupling can be utilized to directly monitor and study relevant QD
s-shell properties like fine-structure splittings, emission saturation and
power broadening, as well as photon statistics with negligible background
contributions. Our results open a new perspective on the understanding and
implementation of dot-cavity systems for single-photon sources, single and
multiple quantum dot lasers, semiconductor cavity quantum electrodynamics, and
their application, e.g. in quantum information technology.
Comment: 17 pages, 4 figures
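The photon anti-bunching reported above is measured as a dip at zero delay in the cross-correlation between the two detector signals. As a purely numerical illustration (all rates, dead times, and bin settings below are invented, not the experiment's values), one can simulate an ideal single-photon stream split onto two detectors and build the start-stop histogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-photon stream: exponential waiting times plus a 12 ns dead
# time (a single emitter cannot emit two photons simultaneously); each
# photon is routed to detector A or B by a 50/50 beam splitter.
gaps = rng.exponential(scale=100.0, size=6000) + 12.0  # ns, illustrative
times = np.cumsum(gaps)
to_a = rng.random(times.size) < 0.5
t_a, t_b = times[to_a], times[~to_a]

def cross_correlation(t_a, t_b, bin_width=2.0, max_tau=60.0):
    """Start-stop histogram of arrival-time differences t_b - t_a."""
    edges = np.arange(-max_tau, max_tau + bin_width, bin_width)
    hist = np.zeros(edges.size - 1)
    for ta in t_a:
        lo = np.searchsorted(t_b, ta - max_tau)
        hi = np.searchsorted(t_b, ta + max_tau)
        hist += np.histogram(t_b[lo:hi] - ta, bins=edges)[0]
    return edges, hist

edges, hist = cross_correlation(t_a, t_b)
centers = 0.5 * (edges[:-1] + edges[1:])
baseline = hist[np.abs(centers) > 30].mean()
g2_zero = hist[np.abs(centers).argmin()] / baseline  # strongly below 1
```

Because consecutive emissions are separated by at least the dead time, the zero-delay bin stays empty and the normalized correlation drops far below one, the signature of a pure single dot-cavity system.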
I-HAZE: a dehazing benchmark with real hazy and haze-free indoor images
Image dehazing has become an important computational imaging topic in the
recent years. However, due to the lack of ground truth images, the comparison
of dehazing methods is not straightforward, nor objective. To overcome this
issue we introduce a new dataset -named I-HAZE- that contains 35 image pairs of
hazy and corresponding haze-free (ground-truth) indoor images. Different from
most of the existing dehazing databases, hazy images have been generated using
real haze produced by a professional haze machine. For easy color calibration
and improved assessment of dehazing algorithms, each scene includes a MacBeth
color checker. Moreover, since the images are captured in a controlled
environment, both haze-free and hazy images are captured under the same
illumination conditions. This represents an important advantage of the I-HAZE
dataset that allows us to objectively compare the existing image dehazing
techniques using traditional image quality metrics such as PSNR and SSIM.
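Having ground-truth pairs is what makes full-reference metrics applicable here. PSNR, for instance, is just a log-scaled mean squared error; a minimal numpy version (function name and toy data are mine, not from the dataset) looks like this, while SSIM is available ready-made, e.g. as scikit-image's `structural_similarity`:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image and
    a dehazing result; higher is better, identical images give infinity."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy stand-ins for a ground-truth image and a dehazing output.
gt = np.full((32, 32, 3), 0.5)
result = np.clip(gt + 0.05, 0.0, 1.0)  # uniform 0.05 error -> ~26 dB
```

With a uniform error of 0.05 on a [0, 1] scale the MSE is 0.0025, so the PSNR is 10 * log10(1 / 0.0025) ≈ 26.0 dB.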
Interaction Between Real and Virtual Humans: Playing Checkers
For some years, we have been able to integrate virtual humans into virtual environments. As the demand for Augmented Reality systems grows, so will the need for these synthetic humans to coexist and interact with humans who live in the real world. In this paper, we use the example of a checkers game between a real and a virtual human to demonstrate the integration of techniques required to achieve a realistic-looking interaction in real time. We do not use cumbersome devices such as a magnetic motion capture system. Instead, we rely on purely image-based techniques to address the registration issue, when the camera or the objects move, and to drive the virtual human's behavior.
1 Introduction
Recent developments in Virtual Reality and Human Animation have led to the integration of Virtual Humans into synthetic environments. We can now interact with them and represent ourselves as avatars in the Virtual World [4]. Fast workstations make it feasible to animate them in real time.
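A standard building block of such image-based registration of a planar scene (like a checkerboard) is estimating a homography from point correspondences; whether this 1999 system used exactly this formulation is not stated, so the Direct Linear Transform sketch below is only an illustration of the general technique, with all data invented:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 matrix H mapping
    src -> dst (dst ~ H @ src in homogeneous coordinates) from at
    least four point correspondences given as Nx2 arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right-singular vector of the smallest
    # singular value, i.e. the (approximate) null space of the system.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Toy check: project the four corners of a board with a known homography
# and recover it from the correspondences alone.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, 3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 8.0], [0.0, 8.0]])
proj = np.c_[src, np.ones(4)] @ H_true.T
dst = proj[:, :2] / proj[:, 2:3]
H_est = estimate_homography(src, dst)
```

With four exact, non-collinear correspondences the eight equations pin down all eight degrees of freedom, so the estimate matches the true homography up to numerical precision; real systems feed in many noisy correspondences and a robust estimator instead.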
Physically Plausible Dehazing for Non-physical Dehazing Algorithms
Images affected by haze usually present faded colours and loss of contrast, hindering the precision of methods devised for clear images. For this reason, image dehazing is a crucial pre-processing step for applications such as self-driving vehicles or tracking. Some of the most successful dehazing methods in the literature do not follow any physical model and are just based on either image enhancement or image fusion. In this paper, we present a procedure that allows these methods to comply with the Koschmieder physical model, i.e., that forces them to have a unique transmission for all the channels, instead of the per-channel transmission they obtain. Our method is based on coupling the results obtained for each of the three colour channels. It improves the results of the original methods both quantitatively using image metrics, and subjectively via a psychophysical test. It especially helps in terms of avoiding over-saturation and reducing colour artefacts, which are the most common complications faced by image dehazing methods.
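The general idea can be sketched as follows, with the strong caveat that this is not the paper's coupling procedure, only a minimal stand-in: invert Koschmieder's model I = J*t + A*(1 - t) per channel to expose the per-channel transmissions a non-physical method implies, collapse them into one shared map (here a plain average), and re-dehaze with that single transmission. Function name and all values are hypothetical.

```python
import numpy as np

def unify_transmission(hazy, dehazed, airlight):
    """Per-channel transmission implied by a (possibly non-physical)
    dehazing result under Koschmieder's model: t_c = (I_c - A)/(J_c - A).
    Average the three channel maps into one shared t (a simplification
    of the paper's coupling) and re-dehaze with it."""
    eps = 1e-6  # guard against division by zero where J_c ~ A
    t_c = (hazy - airlight) / (dehazed - airlight + eps)
    t = np.clip(t_c.mean(axis=-1, keepdims=True), 0.05, 1.0)
    return (hazy - airlight) / t + airlight, t

# Toy check: haze synthesized with a single true transmission is
# recovered exactly when the "dehazed" input equals the true radiance.
J = 0.2 + 0.5 * np.random.default_rng(1).random((4, 4, 3))
A, t_true = 0.9, 0.6
I = J * t_true + A * (1 - t_true)
J_rec, t = unify_transmission(I, J, A)
```

When the dehazed channels are physically consistent the three implied transmissions already agree and the coupling is a no-op; when they disagree, as with enhancement- or fusion-based methods, the shared t is what restores a physically plausible result.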